Here's some SQL to get a user's preferences; e.g., to get rhqadmin's prefs:
SELECT id, name, string_value FROM rhq_config_property WHERE configuration_id = (SELECT configuration_id FROM rhq_subject WHERE name = 'rhqadmin')
Here's some SQL to get the partition table (the agents' failover lists):
select s.name as server, a.name as agent from rhq_server s, rhq_agent a, rhq_failover_list fl, rhq_failover_details fd where a.id = fl.agent_id AND fd.ordinal=0 AND fd.failover_list_id=fl.id AND fd.server_id=s.id order by s.name, a.name
To build a distro (of RHQ and Jopr)
Do a full RHQ distro build within rhq/trunk via mvn -Penterprise,dist -Dmaven.test.skip install
Now build Jopr within jopr/trunk via mvn -Pdist -Dmaven.test.skip install
Jopr distros are found here: jopr/trunk/modules/dist
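Put together, the whole build looks roughly like this (a sketch, assuming the rhq and jopr checkouts sit side by side):
cd rhq/trunk
mvn -Penterprise,dist -Dmaven.test.skip install
cd ../../jopr/trunk
mvn -Pdist -Dmaven.test.skip install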
Here's some JPQL/SQL for the admin/hibernate.jsp page:
OOBs For Definitions Across all resources of a type
select count(oob), sched.definition.id, sched.definition.resourceType.name from MeasurementOutOfBounds oob join oob.schedule sched group by sched.definition.id, sched.definition.resourceType.name order by count(oob) desc
OOBs for individual resources/metrics
select count(oob) as count, res.id as resource_id, sched.id as schedule_id, res.name as resource_name, sched.definition.name as metric_name from MeasurementOutOfBounds oob join oob.schedule sched join sched.resource res group by res.id, sched.id, res.name, sched.definition.name order by count(oob) desc
select count(measuremen0_.id) as col_0_0_, resource2_.ID as col_1_0_, measuremen1_.id as col_2_0_, resource2_.NAME as col_3_0_, measuremen3_.name as col_4_0_ from RHQ_MEASUREMENT_OOB measuremen0_ inner join RHQ_MEASUREMENT_SCHED measuremen1_ on measuremen0_.SCHEDULE_ID=measuremen1_.id inner join RHQ_RESOURCE resource2_ on measuremen1_.RESOURCE_ID=resource2_.ID, RHQ_MEASUREMENT_DEF measuremen3_ where measuremen1_.DEFINITION=measuremen3_.ID group by resource2_.ID , measuremen1_.id , resource2_.NAME , measuremen3_.name order by count(measuremen0_.id) desc
If you want to test connectivity to a server or agent:
cd jbossas/server/default/deploy/rhq.ear/lib
java -Dlog4j.configuration=file:../../../conf/jboss-log4j.xml -cp rhq-enterprise-comm*.jar:rhq-core-util*.jar:jboss-common*.jar:i18nlog*.jar:jboss-jmx*.jar:jboss-remoting*.jar:jboss-serialization*.jar:getopt*.jar:concurrent*.jar:../../../lib/log4j.jar org.rhq.enterprise.communications.command.client.CmdlineClient -u <comm-endpoint-url> -c identify (on Windows, use ; as the classpath separator instead of :)
To ping a server, the -u value should be servlet://server-host:7080/jboss-remoting-servlet-invoker/ServerInvokerServlet
To ping an agent, the -u value should be socket://agent-host:16163
Help Docs
git man pages home: http://www.kernel.org/pub/software/scm/git/docs/git.html
git user manual: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html
Config
Set up global user info in ~/.gitconfig for commit details
git config --global user.name "First Last"
git config --global user.email "email@abc.com"
git config --global branch.autosetuprebase always to always do a rebase
git config branch.<branch-name>.rebase true to always rebase an existing branch
core.autocrlf=input makes sure you check in text files with LF EOL chars (strips CR)
diff.external=/home/mazz/opt/git/diff.py, where diff.py launches Meld, gives you a nice diff GUI instead of raw diff output (see the git config example after this list)
#!/usr/bin/python
import sys
import os
os.system('meld "%s" "%s"' % (sys.argv[2], sys.argv[5]))
git config help: http://www.kernel.org/pub/software/scm/git/docs/git-config.html
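For example, those two settings can be applied with git config (a minimal sketch; the diff.py path is whatever you use locally):
git config --global core.autocrlf input
git config --global diff.external /home/mazz/opt/git/diff.py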
Tracking files
git tracks files you need to commit - it stores this information in the "index"
Add all files, recursing down subdirectories, to the index. This effectively stages the files that can now be committed.
git add .
Get information on what files are ready to be committed and what files are "unstaged" (aka not yet 'git added')
git status
Cloning / Remote Repositories
Clone a local working copy to a bare .git directory repo
git clone --bare /path/to/local/repo repo.git
Clone a remote git repo so you can begin working on it
git clone <remote git URL>
To revert a change back to what was checked in (i.e. back out modifications to a working copy)
git checkout -- <files to revert>
To store the repo on a remote mirror
First you must create a remote "mirror" definition on your local repo (you must cd to the repo root):
git remote add your-remote-id username@hostname:/your/location/to/repo.git
Now after you commit things to your local repo and you want to push up to the remote, you do this:
git push --mirror your-remote-id
Host your own local repo so others can pull from it (the repo directory is nothing more than a remote file system)
git daemon - starts a git server on your box; pass --user-path to allow for ~user notation (see the sketch below)
Or you can just put your repo on an HTTP server (like symlink the directory where .git is in your htdocs)
Or you can just open up the repo via ssh
Make sure you run "git update-server-info" so your repo has necessary metadata written for it
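A rough sketch of serving everything under one directory with git daemon (the base path is hypothetical):
git daemon --reuseaddr --export-all --base-path=/home/mazz/repos
--export-all serves any directory that looks like a git repo even without a git-daemon-export-ok file, and --base-path makes /home/mazz/repos/my-repo.git reachable as git://your-host/my-repo.git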
Once you have remoted the repo, another person can pull it down:
git remote add mazz http://my-host-name/~myuser/my-repo/.git/
git fetch mazz
git checkout --track -b branchName mazz/branchName
git pull mazz branchName (note - this probably isn't needed)
git push origin branchName:refs/heads/branchName (push mazz branch to another remote repo "origin")
GUI Tools
To peruse the repo, branches and working copy, use the GUI tool
git gui
The GUI tool has menu options to get to the gitk tool, but you can also just run gitk directly
.gitignore
To ignore files, you can use .gitignore.
Patterns are read from a .gitignore file in the same directory as the path, or in any parent directory, with patterns in the higher level files (up to the toplevel of the work tree) being overridden by those in lower level files down to the directory containing the file.
These patterns match relative to the location of the .gitignore file.
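A small illustrative .gitignore (the patterns here are just examples):
# build output
target/
*.class
# a ! prefix re-includes something a broader pattern excluded
!keep-this.jar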
Logs
To get a log of history
git log --pretty=<oneline|short|medium|full|fuller|email|raw|format:string>
There are other options to the git log command that let you filter the results
--since="1 month ago" --until=yesterday --author="mazz"
Tons more options, including diff history. See http://www.kernel.org/pub/software/scm/git/docs/git-log.html
Diff
git diff is how you generate patch files and look at diffs. See http://www.kernel.org/pub/software/scm/git/docs/git-diff.html
If you want to see what you would be committing (i.e. the diff between HEAD and the index):
git diff --cached
What is different between the index and the local working copy files
git diff
git diff-files --name-status tells you just what files are changed
git whatchanged -p shows a patch diff of past history
git reflog shows you when the tip of the branch changed
git diff origin/master..master tells what I changed from the remote repo
If you have set up an external diff viewer, this shows the diff in that GUI instead of as text
gitk origin/master..master will show the diff in gitk
git diff --no-ext-diff disables the external diff viewer
If you have two branches A and B that started from the same common commit but have since diverged differently, you can use the "triple dot" notation to see how they both diverged:
gitk A...B
If I don't want to pull from the remote, but I want to see what changed up there:
git fetch origin <branch>
git diff --name-only origin/<branch>
git diff origin/<branch>
Checkout
To checkout an old version (this example gets the one from the previous commit):
git checkout HEAD^ path/to/file
git show HEAD^:path/to/file just shows the file contents, but doesn't check it out
Patches
Create a patch for all commits that are different from origin that can be emailed
git format-patch --stdout origin
Create a patch for the last commit just performed on the current branch
git format-patch --stdout HEAD^1..HEAD
Apply a series of patches from emails
cat ...output from format-patch... | git am
Apply a diff to working copy and index (not committed)
git apply ...diff.patch...
Send the patch via email
git format-patch --stdout origin > my.patch
git send-email my.patch
For this to work, your .gitconfig has to have SMTP set up:
[sendemail]
    smtpserver = your.smtp.hostname
Branching
To see what local branches are in the repo, and which one your working copy is currently on:
git branch
To create a new branch
git branch <name of new branch>
To switch to a branch
git checkout <name of branch>
How to checkout a "tracking" branch that tracks a remote branch which you can push to/pull from
git checkout --track -b branch_name origin/branch_name
If you already have a local branch but you want to push it up to the remote repo
git push origin the-name-of-your-local-branch
How you can create your own remote branch and begin working in it
git push origin master:refs/heads/experimental
This creates the branch experimental in the origin repository by copying the current master branch. This form is only needed to create a new branch or tag in the remote repository when the local name and the remote name are different; otherwise, the ref name on its own will work.
git push origin origin/<branch to branch off of>:refs/heads/<new branch name>
git fetch origin
git checkout --track -b foo origin/<new branch name>
git pull
To diff between two branches
git diff --stat <branch1> <branch2>
To delete a branch (usually done after you merge into master and no longer need the branch)
git branch -d <name of doomed branch>
To delete a remote branch (dangerous!)
git push origin :doomed-branch-name
To merge a branch, you check out the one branch and ask to merge the second
git merge <name of branch to merge into working copy>
git reset allows you to revert back if a merge has too many conflicts
Rebasing means you just move the base of your working copy to a given branch
You must have everything added/committed to your index
git rebase <branch to base the working copy on>
If you are working on something and don't want to commit to your branch, but need to switch to another branch, you can stash your changes. After you switch to the other branch, finish doing what you need, and switch back, you can apply your stash to get back to the way you were before:
git stash
git stash apply
You can see what you stashed using the options "list" and "show" to git stash
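A typical stash sequence (the branch names are just examples):
git stash (your uncommitted changes are saved away)
git checkout other-branch (do the other work there and commit it)
git checkout original-branch
git stash apply (your uncommitted changes are back in the working copy)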
You can peek at what a remote repo changed without merging first, using the "fetch" command; this allows you to inspect what someone else did, using a special symbol "FETCH_HEAD", in order to determine if there is anything worth pulling, like this:
git fetch /home/bob/myrepo master
git log -p HEAD..FETCH_HEAD
This operation is safe even if you have uncommitted local changes. The range notation "HEAD..FETCH_HEAD" means "show everything that is reachable from the FETCH_HEAD but exclude anything that is reachable from HEAD".
If you want to visualize this: gitk HEAD..FETCH_HEAD
If you want to push a small subset of commits excluding others you might also have:
git checkout --track -b master-copy origin/master (create a local branch that clones master)
git pull --rebase (paranoia, just make sure you have everything up to date)
git cherry-pick SHA (cherry pick over things you want to push out)
git push origin HEAD:master (pushes the current branch to the remote ref matching master in the origin repository. This form is convenient to push the current branch without thinking about its local name)
Tagging
You can tag a branch, usually to denote a release
git tag -a <tag name>
TODO: how do you switch to or checkout a tag???
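One answer to verify: a tag name can be given anywhere a commit can, so checking it out directly leaves you on a detached HEAD; to do work based on the tag, branch off of it:
git checkout <tag name> (detached HEAD - fine for just looking around)
git checkout -b <new branch name> <tag name> (creates a branch starting at the tag so you can commit)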
Bisecting
Use "git bisect" to hunt down when a bug was introduced in the code base.
You bisect the code, test and tell git if the test was good or bad. git helps track down the commit that caused the error.
http://www.kernel.org/pub/software/scm/git/docs/git-bisect.html
Maintenance
You need to garbage collect the metadata periodically via git gc
Use git fsck to see if you have unreachable or corrupted objects
Use git prune to clean up problems that fsck finds
git update-index --refresh after you copy a repo to make sure the index is OK
To delete local branches that have been removed on the remote repo: git remote prune origin
Running JPDA debugging from command line:
jdb -connect com.sun.jdi.SocketAttach:port=8787
jdb -listconnectors will show you the different transports and arguments you can use
http://java.sun.com/j2se/1.5.0/docs/tooldocs/windows/jdb.html
To run a VM with JMX remoting enabled without authentication, you must pass in the following system properties:
-Dcom.sun.management.jmxremote.port=19988
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
To run a VM with JMX remoting enabled with password authentication:
-Dcom.sun.management.jmxremote.port=19988
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=true
-Dcom.sun.management.jmxremote.password.file=/some/directory/jmxremote.password
Note that "jmxremote.password" must be read-only. On Windows, you must use "cacls" command to do this: cacls /some/directory/jmxremote.password /P username:R
A password file template is located at $JRE_HOME/lib/management/jmxremote.password.template.
There is also an auth file that you can use to define other roles.
For more information on setting this up and setting up SSL, see http://java.sun.com/j2se/1.5.0/docs/guide/management/agent.html.
To run JBossAS 4 with JMX remoting enabled and with platform MBeans exposed via JNP too, pass in these extra variables:
-Djboss.platform.mbeanserver
-Djavax.management.builder.initial=org.jboss.system.server.jmx.MBeanServerBuilderImpl
To get stats, like processes and sessions: select * from V$RESOURCE_LIMIT
To enable recovery on Oracle, the user must have permissions such as these
grant select on sys.dba_pending_transactions to username;
grant select on sys.pending_trans$ to username;
grant select on sys.dba_2pc_pending to username;
grant execute on sys.dbms_system to username;
grant select on v$xatrans$ to username;
grant execute on dbms_system to username;
Normally, Postgres is installed to only allow clients on the localhost to connect to the server:
host all all 127.0.0.1/32 md5
To allow clients found on your subnet to connect, you can add this line:
host all all 192.168.0.0/16 md5
I wrote some simple Windows batch scripts that I use when testing on my box.
Here's the page on what scripts to run for the agentspawn stuff: Design-AgentSpawn
Here is a test script used to run the agentspawn stuff on multiple boxes that share an NFS directory. Create an "agents" directory and put the following script in it, named runspawn.sh:
#!/bin/sh if [ "$2" == "" ]; then echo "Syntax: $0 <hostname> <copy|start|stop|clean|help>" exit 1; fi echo Copying $1 agentspawn.properties file cp properties-files/agentspawn.properties.$1 agentspawn/src/scripts/agentspawn.properties echo Running ant $2 cd agentspawn/src/scripts ant $2
Checkout or copy the agentspawn module in here. Then create a directory called properties-files and put copies of agentspawn.properties in it (named agentspawn.properties.<hostname>). Each of those properties files can be customized for the agents that will be spawned on that host (you will probably want to change the vm.dir and the starting ports). Now you can run that script, giving it the hostname as the first argument and the ant target as the second. Here's what the file/directory layout should be:
agents
|__ runspawn.sh
|__ properties-files
|   |__ agentspawn.properties.<hostname> (multiple ones of these)
|__ agentspawn
    |__ src
        |__ scripts
(note: The agentspawn is the full contents of the /etc/agentspawn module, including the /target binaries after you "mvn install" it).
Here are test scripts I use to prepare multiple server distros when each box has the same NFS mount. You have to build the software first, e.g.:
1. cd rhq/trunk
2. mvn -Pdev,enterprise,jon05-oracle install
3. cd jon/trunk
4. mvn -Pdev install
This produces "rhq/trunk/dev-container". Now you can run copy-servers.sh to copy the distros for the different hosts. delete-servers.sh will purge all the distros. run-server.sh will run the proper distro, assuming your server distros are put into directories with the same name as their host (i.e. "hostname -s").
echo Making server directories to host the software for all servers
mkdir jon01
mkdir jon02
mkdir jon03
mkdir jon05
echo Copying the jon01 container
cp -R ~/source-code/rhq/trunk/dev-container/* ./jon01
echo Copying the jon02 container
cp -R ~/source-code/rhq/trunk/dev-container/* ./jon02
echo Copying the jon03 container
cp -R ~/source-code/rhq/trunk/dev-container/* ./jon03
echo Copying the jon05 container
cp -R ~/source-code/rhq/trunk/dev-container/* ./jon05
echo Copying the server configuration file to all servers
cp rhq-server.properties.jon01 ./jon01/bin/rhq-server.properties
cp rhq-server.properties.jon02 ./jon02/bin/rhq-server.properties
cp rhq-server.properties.jon03 ./jon03/bin/rhq-server.properties
cp rhq-server.properties.jon05 ./jon05/bin/rhq-server.properties
echo Copying the perftest plugin
if [ -f ./rhq-perftest-plugin-2.1.0-SNAPSHOT.jar ]; then
   cp ./rhq-perftest-plugin-2.1.0-SNAPSHOT.jar ./jon01/jbossas/*/rhq-downloads/rhq-plugins
   cp ./rhq-perftest-plugin-2.1.0-SNAPSHOT.jar ./jon02/jbossas/*/rhq-downloads/rhq-plugins
   cp ./rhq-perftest-plugin-2.1.0-SNAPSHOT.jar ./jon03/jbossas/*/rhq-downloads/rhq-plugins
   cp ./rhq-perftest-plugin-2.1.0-SNAPSHOT.jar ./jon05/jbossas/*/rhq-downloads/rhq-plugins
else
   echo PERFTEST PLUGIN NOT FOUND HERE - IT WILL NOT BE DEPLOYED
fi
echo Done.
echo Removing all server directories
rm -rf jon01
rm -rf jon02
rm -rf jon03
rm -rf jon05
echo Done.
#!/bin/sh
_host=`hostname -s`
cd ${_host}/bin
./rhq-server.sh $*
To deploy an artifact to your local maven repo, see http://maven.apache.org/plugins/maven-deploy-plugin/usage.html
mvn deploy:deploy-file -Durl=file://C:\m2-repo \
    -DrepositoryId=some.id \
    -Dfile=your-artifact-1.0.jar \
    [-DpomFile=your-pom.xml] \
    [-DgroupId=org.some.group] \
    [-DartifactId=your-artifact] \
    [-Dversion=1.0] \
    [-Dpackaging=jar] \
    [-Dclassifier=test] \
    [-DgeneratePom=true] \
    [-DgeneratePom.description="My Project Description"] \
    [-DrepositoryLayout=legacy] \
    [-DuniqueVersion=false]
Here's some useful tidbits regarding SVN usage.
To find out what was checked into a branch since the branch was created:
svn log --stop-on-copy
If you committed with proper log messages (i.e. include the JIRA number in the log message), you can easily see what was committed to the branch.
To rollback a particular revision, execute the following (where ### is the revision you want to back out, note the - (dash) prefixing the number - this is required):
svn merge -c -### <svn http url>
You can then svn status and svn diff to make sure it's all correct. Then svn commit to check in the changes, which effectively rolls back that revision. Read http://svnbook.red-bean.com/en/1.0/ch04s04.html and http://svnbook.red-bean.com/en/1.0/ch04s03.html for information about merging and branches.
To merge a change from trunk into a branch, or vice versa (from a branch to trunk), or even from one branch to another branch, you do the same "svn merge -c" command from above, only you don't put a minus sign in front of the rev. First, you need to make sure the destination branch/trunk is checked out and is your current working copy. Then you basically say, "I want to merge into my working copy the changes introduced by revision ### of <svn URL>":
svn merge -c ### <svn URL>
Note that the working copy where your current directory is will get the newly merged code. You then commit your working copy.
Suppose we build a GA distribution from trunk (svn rev 100) and create a tag off of svn rev 100 called 1.0.GA. We've since moved forward on trunk with lots of new commits - we are up to svn rev 300 now. Now suppose we need to patch the 1.0.GA product. The fix for the patch was already committed to trunk, in svn rev 150. How can we apply the fix to 1.0.GA?
First, create a branch based on the 1.0.GA tag: svn copy http://server/product/tags/1.0.GA http://server/product/branches/1.1
Checkout the branch so you can work on a local copy: svn co http://server/product/branches/1.1
Now merge that fix's svn rev into the local branch working copy: svn merge -r 149:150 http://server/product/trunk
Note that because it's a single rev, we could have used "-c 150" instead of "-r 149:150"
Do an svn diff to double-check that you did it right and picked up the changes you need
Finally, checkin the fix: svn commit
At this point, your branch is now 1.0.GA with the patch and only that patch.
To get a file that was deleted from SVN, you have to use svn log --verbose to find out which revision deleted it, then svn up -r <rev minus one> file.txt to retrieve the file. Note that this means you need some working copy on your file system in which to run the svn log and svn up commands. You might be able to pass URLs to the commands if you don't have a working copy.
int[] levels = {10,1,83,8,6};
int leveltotal = levels[0];
int sum = leveltotal;
System.out.println("Level #1=" + leveltotal);
for (int i=1; i<levels.length; i++) {
    leveltotal *= levels[i];
    sum += leveltotal;
    System.out.println("Level #" + (i+1) + "=" + leveltotal);
}
System.out.println("Total=" + sum);
find . -name target -print | xargs rm -rf
Joins can be thought of in the context of a Venn diagram. You have two circles - A and B. They overlap in some parts.
The overlap is your INNER join - only the elements in A that also have associated B elements.
The A circle is the LEFT join - all A elements regardless of whether they have associated B elements.
The B circle is the RIGHT join - all B elements regardless of whether they have associated A elements.
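For example, with two hypothetical tables a and b joined on b.a_id = a.id:
SELECT a.id, b.id FROM a INNER JOIN b ON b.a_id = a.id -- only the A rows that have matching B rows
SELECT a.id, b.id FROM a LEFT JOIN b ON b.a_id = a.id -- all A rows; the B columns are NULL where there is no match
SELECT a.id, b.id FROM a RIGHT JOIN b ON b.a_id = a.id -- all B rows; the A columns are NULL where there is no match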
If you have a set of UNIX machines and their clocks are not NTP-synced, use the "date" command to set their clocks to within a second or two of each other. To get the time of one machine in the format that "date" needs to set the clock, run this:
echo date `date +%m%d%H%M%y.%S`
Take the result of that and run it on the rest of the machines.
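For example (the timestamp below is illustrative), the echo on the reference machine prints a ready-to-run command such as:
date 0106120509.30
Run that exact command (as root) on each of the other machines to set their clocks.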
If you want to round something to thousands (like 103023 to 103000 or 103998 to 104000):
long l = (long) ((number / 1000.0) + 0.5) * 1000;
More generically, replace 1000.0 with the rounding unit you want (keep the divisor a double so the division doesn't truncate):
long l = (long) ((time / (double) specifiedLimit) + 0.5) * specifiedLimit;
long newValue = ...;
long count = ++currentCountOfItems; // must be the count of all items currently (including the new one)
currentAvg = (((count - 1) * currentAvg) + newValue) / count;
If you aren't sure a box has multicast enabled, run the following JGroups test app. This will run the receiver - it will print to stdout when it hears a multicast:
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555 -bind_addr 0.0.0.0
Now run the sender - when this starts, enter some text and hit enter to broadcast the message to all receivers:
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555 -bind_addr 0.0.0.0
Play with -bind_addr to see what NICs support multicasting.
Attached is a standalone Hibernate/JPA application based off of the Hibernate/JPA Hello World sample app. Build it with "ant package", then run the executable jar via "java -jar build/helloworld.jar". Type "help" at the prompt for the commands you can use to insert, delete and query the database. The database is Hypersonic using its in-memory feature - no database files or data will be persisted. There are two Hibernate/JPA entities - hello.Message and hello.Person. You can insert, delete and query them via the "add", "delete" and "get" commands. Note that if you want to use the RHQ hibernate plugin to manage it via jmx remoting, you have to set some system properties to configure the VM:
java -Dcom.sun.management.jmxremote.port=1999 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false -jar helloworld.jar
Attached is a very small sample JMX App. It contains a prebuilt jmxapp.jar plus the source and an ant script to build the jar. It has a "runjmxapp" script that you can use as the command to start the JMX app. All it does is register a few MBeans to the platform MBeanServer and wait until the VM is killed. One MBean's name is a hardcoded static name, and the others have names that are partially determined at runtime. Use this when you need to test connectivity to a remote JMX MBeanServer.
Attached is a small executable jar that converts epoch milliseconds to date strings. It can convert in either direction (epoch millis to date or vice versa). Enter java -jar epochmillis.jar for usage help. You can put more than one argument on the command line to convert more than one value:
java -jar epochmillis.jar 1133467812345 "Feb 12, 2003 10:10:10 PM" 1174567834560
1133467812345=Thu Dec 01 15:10:12 EST 2005
1045105810000=Wed Feb 12 22:10:10 EST 2003
1174567834560=Thu Mar 22 08:50:34 EDT 2007
START_INSTANCES=${1:-1}
END_INSTANCES=${2:-30}
export JBOSS_HOME=${HOME}/opt/jbossas/jboss-eap-4.3/jboss-as
export LOGS_DIR=${JBOSS_HOME}/logs
mkdir -p ${LOGS_DIR}
cd ${JBOSS_HOME}/bin
for (( i = ${START_INSTANCES} ; i <= ${END_INSTANCES}; i++ )); do
   if [ ! -d ${JBOSS_HOME}/server/config${i} ]; then
      echo "Creating new config dir '${JBOSS_HOME}/server/config${i}'"
      cp -pr ${JBOSS_HOME}/server/default ${JBOSS_HOME}/server/config${i}
   fi
   echo "Starting instance #${i} bound to 127.0.0.${i}..."
   nohup ./run.sh -c config${i} -b 127.0.0.${i} >${LOGS_DIR}/config${i}.log 2>&1 &
   sleep 2
done
Ran into an issue where the section on the main dashboard page called Recently Updated was not updating. To fix this, go to Administration>Content Indexing and rebuild the search index.
Periodically, the recovery manager will ask the resource (e.g. the database), "what transactions do YOU know of that are pending and need to be recovered?". This is because the database also has the ability to mark a transaction as "prepared and waiting for a conclusion". It asks this independently of what the resource manager itself thinks is pending - i.e. the RM tries to recover both transactions it knows of in its object store (top-down recovery) and transactions the database knows of (bottom-up recovery). It needs an XAResource to ask this. This is why in Oracle you need to grant some select permissions to the datasource's user - the database's pending transactions are stored in special dba tables, and it's those tables the RM looks at to determine what is pending in the database.
There is a way to plug in a custom object store implementation. The object store stores transaction logs that include information about transactions that are in the 2PC stage. Out-of-box impls include a database object store and a filesystem object store (the latter is used out of the box in JBossAS and is stored in data/tx-object-store). You can write an in-memory store implementation - but you have to make sure it only stores a finite amount of data to avoid OOM, or just throw the logs away (but if you do that, why bother? just don't use XA). There is no config prop to say "don't log" - you have to write your own object store impl to do that.
1PC is on by default if there is a single resource enlisted in a transaction (no prepare phase), even if it's XA. NO LOGS ARE WRITTEN when a single resource is enlisted.
If two <local-tx-datasource> resources are enlisted in a transaction, does JBossTM still try to do "pseudo-XA" things with it? See https://jira.jboss.org/jira/browse/JBTM-443
JBossMQ has its own XA implementation (which is why it has the JMS_TRANSACTIONS table). We can't disable that AFAIK. Therefore, it provides its own XAResource which itself requires a JBossTM recovery module. See the relevant JIRAs: https://jira.jboss.org/jira/browse/JBTM-279 and https://jira.jboss.org/jira/browse/JBAS-5502
Performance: XA vs. Local-Tx - there are additional round trips in XA - the 2PC protocol's prepare and related phases introduce about 3 to 4 additional round trips per transaction.
The # of resources enlisted in a transaction matters, not what type the resources are (XA vs. non-XA). If 2+ resources are enlisted, a log is written out and 2PC is attempted. If one is a local-tx, then a wrapper around its resource is provided that "simulates" an XA resource - but during the 2PC prepare phase, the prepare is a no-op in that wrapper. Note that when 2+ resources are enlisted, logs are written to the object store, even if they aren't XA.
"SELECT formatid, globalid, branchid FROM SYS.DBA_PENDING_TRANSACTIONS" is executed every X seconds (defined by com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod which defaults to 120).
com.arjuna.ats.arjuna.coordinator.txReaperTimeout (set to 120000) is for tx timeouts. As per "jhalliday", "actually its on-demand now, although periodic is still a config option. it wakes up, looks for live tx that have reached their allotted max lifetime and rolls them back. it actually won't touch ones that have been prepared, only the ones that have not reached that stage yet."
If the recovery fails, the resource manager continues to attempt to recover (unless a very specific set of errors are seen, in which case the RM knows the transaction is lost forever and no longer attempts it). You should be able to configure an expiration time - when this times out, the expiration code will remove the tx logs from the object store and stop trying to recover that tx. There is a bug currently in JBossTM in which that expiration never happens: https://jira.jboss.org/jira/browse/JBTM-418
Find out how to enable debug to trace 2PC and object store access: http://www.jboss.com/index.html?module=bb&op=viewtopic&t=147697
/etc/sysconfig/networking/profiles/default contains network settings
hosts - default hosts, when you switch connections, this overwrites /etc/hosts
ifcfg-eth0 - default settings when you switch to this device. I set PEERDNS=no, which seems to tell network manager that I don't want my router to be considered a DNS server (PEERDNS=yes added 192.168.0.1 to my /etc/resolv.conf, which I did not want)
If some network connections work but not all, and everything else has been eliminated, try lowering the MTU setting on the network interface (I had to do this on a virtual machine guest):
ifconfig eth0 mtu 1000
The "thread-max" kernel setting
To find out what the current max threads limit is: sysctl -a | grep threads
change the value on-the-fly with: echo [new value] > /proc/sys/kernel/threads-max
make the setting permanent by adding "kernel.threads-max = [new value]" in /etc/sysctl.conf file and then issue the "sysctl -p" command to load it.
Mount Windows Share to Linux
Samba has to be installed
Test if Linux machine sees the shares on the Windows machine: smbclient -L <windows-host> -U <username>
Make directory for the mountpoint: mkdir /mnt/<name-of-mountpoint>
Mount the share:
mount -t cifs -o username=<username>,password=<password> //<windows-host>/<share> /mnt/<name-of-mountpoint>
Note: this saves the password
Create a symlink to the mounted drive: ln -s /mnt/<name-of-mountpoint> /<path-of-symlink>
To run an X program on a server but display on local machine:
First, the local machine must allow it - use xhost, such as: xhost +
ssh into the remote machine, use the -X option to enable X forwarding: ssh -X hostname
Make sure DISPLAY environment variable is set correctly: export DISPLAY=localhostname:0.0
Now you can run the X program on the hostname and have it display on localhostname
Burn video to DVD
first convert the video with ffmpeg:
ffmpeg -i input.m4v -target ntsc-dvd output.mpg
now do the authoring
dvdauthor --title -o dvd -f output.mpg
dvdauthor -o dvd -T
NOTE: --title sets the title of the DVD, -T sets the table of contents. In both of the above commands the -o switch references a directory, NOT the actual dvd.
roll the .mpg file into an ISO file
mkisofs -dvd-video -o dvdimage.iso dvd
NOTE: mkisofs is making an actual DVD video ISO file using the directory, dvd.
burn the ISO to DVD disc
growisofs -speed=1 -dvd-compat -Z /dev/dvd=dvdimage.iso
NOTE: -speed=1 is for use with lower quality discs, increase as necessary
How to authorize a normal user with root access using sudo
Boot into single user mode
Restart the box, press F12 (or whatever) to get to the GRUB menu
You want to edit the kernel entry - append "single" to the end of the command
Continue to boot - this gets you into single user mode as root
Now that you are root, you can edit /etc/sudoers so it has this line in it
normalUserName ALL=(ALL) ALL
Now shut down the box and restart it normally. The normalUserName user can now use "sudo"
While in a room, "/mode +i" turns on "invitation only" so only invitees can join
You can forward from one room to another so when someone joins room A they automatically get forwarded to room B
You must have op perms for both rooms A and B
You should set room A to "invite-only"
You should turn on the guard of room A: /msg ChanServ #A guard on
Turn on forwarding - enter this while in room A: /mode +f #B
Kick off all users if room A is no longer to be used using /kick <nick>
See http://developer.pidgin.im/wiki/Protocol%20Specific%20Questions#IRCProtocol for tips on using buddy pounces when logging into a server and you need to send a message to a userserv or chanserv.
How do I identify myself with a buddy pounce?
Enable your IRC account.
Add the nick of the user or bot to your buddy list
Right-click the new buddy and click "Add Buddy Pounce" to create a new pounce
Make sure "Signs on" is the only checked box in the "Pounce When Buddy..." section
Make sure "Send a message" is checked under "Action"
Enter the message, such as: identify mypassword
Make sure "Recurring" is checked beneath "Options" or the pounce will work only once
Click Save to save the pounce. Note that you do not include '/msg nickname' as part of the message in the pounce.
If I fail to log in or identify, with a message like "nick is unavailable", go to NickServ and say "release <nick> <password>". You may have to do it more than once.
Here are some Firefox plugins that I've found useful.
Web Development plugins
Firebug - Web Development Tools
Web Developer - Web Development Tools
HTML Validator - Validates HTML pages
Poster - HTTP/REST requester
Miscellaneous
Forecastfox - Weather forecasts
TableTools - Sorts tables
OldBar - Returns address bar back to Firefox 2 behavior
Screengrab - Capture web pages as screen snapshot images
Its All Text - Edit textareas in external editor
Here are some Thunderbird plugins that I've found useful.
Colored Diffs - Graphically displays .patch content of commit emails
Lightning - Calendar
LDAP Server: https://www.opends.org/
LDAP Client: http://www.mcs.anl.gov/~gawor/ldap/ or http://ldapmanager.org
Java Decompiler: http://java.decompiler.free.fr
Defragmentation Tools for Windows
Java Preferences Viewer: http://javaprefs.googlepages.com/
yum repo: http://www.mongodb.org/display/DOCS/CentOS+and+Fedora+Packages
yum install command: yum install mongo-10gen mongo-10gen-server
mongo command line: "mongo" then at the prompt you can: "use rhq" and "db.collectionName.find()"